23 research outputs found

    Visualización Ilustrativa de Fibras Cerebrales (Illustrative Visualization of Brain Fibers)

    Master's final project carried out in collaboration with the Eindhoven University of Technology. Diffusion Tensor Imaging (DTI) is a magnetic resonance (MR) based imaging technique that offers a unique view of the structural organization of the brain's white matter. This is achieved by measuring the diffusion of water molecules in the tissue. In water, diffusion is free and has the same magnitude in all directions; in this scenario we obtain an isotropic diffusion profile. In fibrous tissues, however, such as the white matter of the brain, diffusion is restricted in the direction perpendicular to the fibers, so we obtain an anisotropic diffusion profile. In DTI, the diffusion profile is modeled as a Gaussian probability distribution and can therefore be described by a second-order tensor [BPD94]. In this model, the principal eigenvector of the tensor corresponds to the direction of greatest diffusion, which coincides with the direction of the fiber structure (Figure 1.1). By following these tensors we can progressively reconstruct a 3D model of the fibers; this reconstruction is called fiber tracking [VAD05] (Figure 1.2). Despite its potential, the output of fiber tracking contains a considerable amount of uncertainty. This error accumulates throughout the model-building process: the data-acquisition phase can introduce errors due to image noise, image distortion, scanner parameters, etc., and the reconstruction phase introduces approximation errors that depend on the diffusion model used. Most algorithms do not show this error, which conveys a sense of certainty in the data that does not correspond to reality.
An application for neurosurgery cannot ignore these errors, since it will be used to make decisions and assess surgical risk. If we do not show this uncertainty when estimating the length of brain fibers, we may cause damage to healthy brain tissue. Our goal in this thesis is to visualize this fiber model while showing the level of uncertainty of each fiber or group of fibers. The Department of Biomedical Engineering of the Eindhoven University of Technology has developed an application for visualizing brain fibers (http://bmia.bmt.tue.nl/software/dtitool/). Pere-Pau Vàzquez is in contact with this department, which is developing methods for visualizing the uncertainty of the data. This work is done in collaboration with them, in particular with Anna Vilanova and Ralph Brecheisen. For this reason, we have implemented our visualization technique as a plugin for that application, which provides us with many facilities, such as the loading of fiber models and part of the visualization pipeline already implemented.
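The tensor-following idea described above can be sketched in a few lines: the principal eigenvector of each 3x3 diffusion tensor gives the local fiber direction, and a streamline is grown from a seed point by repeated small steps. This is only a minimal illustrative sketch (fixed-step Euler integration, a hypothetical `tensor_field` callback), not the DTI Tool's actual tracking code:

```python
import numpy as np

def principal_direction(tensor):
    """Return the unit eigenvector of the largest eigenvalue of a
    3x3 symmetric diffusion tensor (the local fiber direction)."""
    w, v = np.linalg.eigh(tensor)          # eigenvalues in ascending order
    return v[:, -1]                        # column of the largest eigenvalue

def track_fiber(tensor_field, seed, step=0.5, n_steps=100):
    """Toy deterministic fiber tracking: follow the principal eigenvector
    from a seed point with fixed-step Euler integration.
    `tensor_field(p)` is assumed to return the 3x3 tensor at point p."""
    points = [np.asarray(seed, dtype=float)]
    direction = principal_direction(tensor_field(points[0]))
    for _ in range(n_steps):
        d = principal_direction(tensor_field(points[-1]))
        if np.dot(d, direction) < 0:       # keep a consistent orientation
            d = -d
        direction = d
        points.append(points[-1] + step * d)
    return np.array(points)
```

In a constant tensor field whose largest eigenvalue lies along one axis, the streamline simply marches along that axis; real trackers add interpolation, anisotropy-based stopping criteria, and curvature thresholds.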


    Improving dimensionality reduction projections for data visualization

    In data science and visualization, dimensionality reduction techniques have been extensively employed for exploring large datasets. These techniques transform high-dimensional data into reduced versions, typically in 2D, with the aim of preserving significant properties of the original data. Many dimensionality reduction algorithms exist, and nonlinear approaches such as t-SNE (t-Distributed Stochastic Neighbor Embedding) and UMAP (Uniform Manifold Approximation and Projection) have gained popularity in the field of information visualization. In this paper, we introduce a simple yet powerful manipulation for vector datasets that modifies their values based on weight frequencies. This technique significantly improves the results of the dimensionality reduction algorithms across various scenarios. To demonstrate the efficacy of our methodology, we conduct an analysis on a collection of well-known labeled datasets. The results demonstrate improved clustering performance when attempting to classify the data in the reduced space. Our proposal presents a comprehensive and adaptable approach to enhance the outcomes of dimensionality reduction for visual data exploration. This research was funded by PID2021-122136OB-C21 from the Ministerio de Ciencia e Innovación, Spain, and by FEDER (EU) funds. Peer Reviewed. Postprint (published version).
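The abstract does not spell out the weight-frequency manipulation itself, so the following is only a hypothetical sketch of the general idea of reweighting vector values by how frequently features occur (an IDF-like factor) before handing the data to t-SNE or UMAP; `frequency_reweight` and its formula are assumptions for illustration, not the paper's method:

```python
import numpy as np

def frequency_reweight(X):
    """Hypothetical sketch: scale each feature by the inverse of how often
    it is non-zero across the dataset (an IDF-like factor), so rare
    features gain weight before the 2D projection is computed."""
    X = np.asarray(X, dtype=float)
    freq = np.count_nonzero(X, axis=0) / X.shape[0]     # per-feature frequency
    weights = np.log(1.0 / np.maximum(freq, 1e-12)) + 1.0
    return X * weights
```

The reweighted matrix would then be passed unchanged to the chosen reduction algorithm (e.g. t-SNE or UMAP), which is what makes this kind of preprocessing adaptable across projection methods.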

    Enabling viewpoint learning through dynamic label generation

    Optimal viewpoint prediction is an essential task in many computer graphics applications. Unfortunately, common viewpoint qualities suffer from two major drawbacks: dependency on clean surface meshes, which are not always available, and the lack of closed-form expressions, which requires a costly search involving rendering. To overcome these limitations we propose to separate viewpoint selection from rendering through an end-to-end learning approach, whereby we reduce the influence of the mesh quality by predicting viewpoints from unstructured point clouds instead of polygonal meshes. While this makes our approach insensitive to the mesh discretization during evaluation, it only becomes possible when resolving label ambiguities that arise in this context. Therefore, we additionally propose to incorporate the label generation into the training procedure, making the label decision adaptive to the current network predictions. We show how our proposed approach allows for learning viewpoint predictions for models from different object categories and for different viewpoint qualities. Additionally, we show that prediction times are reduced from several minutes to a fraction of a second, as compared to state-of-the-art (SOTA) viewpoint quality evaluation. Code and training data are available at https://github.com/schellmi42/viewpoint_learning, which is to our knowledge the biggest viewpoint quality dataset available. This work was supported in part by project TIN2017-88515-C2-1-R (GEN3DLIVE), from the Spanish Ministerio de Economía y Competitividad, and by FEDER (EU) funds. Peer Reviewed. Postprint (published version).
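The adaptive label decision described above can be illustrated with a small sketch: when several candidate viewpoints are (near-)equally good, the training label is chosen as the candidate closest to the network's current prediction, which resolves the ambiguity. The function below is a hypothetical simplification of that idea, not the paper's implementation:

```python
import numpy as np

def dynamic_label(candidates, prediction):
    """Sketch of dynamic label generation: among several equally valid
    candidate view directions (unit vectors), pick as training label the
    one most aligned with the network's current predicted direction."""
    candidates = np.asarray(candidates, dtype=float)
    dots = candidates @ np.asarray(prediction, dtype=float)  # cosine alignment
    return candidates[int(np.argmax(dots))]
```

During training this choice would be re-evaluated every iteration, so the label follows the network rather than forcing it toward an arbitrary member of the ambiguous set.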

    A general illumination model for molecular visualization

    Several visual representations have been developed over the years to visualize molecular structures, and to enable a better understanding of their underlying chemical processes. Today, the most frequently used atom-based representations are the Space-filling, the Solvent Excluded Surface, the Balls-and-Sticks, and the Licorice models. While each of these representations has its individual benefits, when applied to large-scale models spatial arrangements can be difficult to interpret when employing current visualization techniques. In the past it has been shown that global illumination techniques improve the perception of molecular visualizations; unfortunately, existing approaches are tailored towards a single visual representation. We propose a general illumination model for molecular visualization that is valid for different representations. With our illumination model, it becomes possible, for the first time, to achieve consistent illumination among all atom-based molecular representations. The proposed model can furthermore be evaluated in real time, as it employs an analytical solution to simulate diffuse light interactions between objects. To be able to derive such a solution for the rather complicated and diverse visual representations, we propose the use of regression analysis together with adapted parameter sampling strategies, as well as shape-parametrization-guided sampling, which are applied to the geometric building blocks of the targeted visual representations. We discuss the proposed sampling strategies and the derived illumination model, and demonstrate its capabilities when visualizing several dynamic molecules. Peer Reviewed. Postprint (author's final draft).
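The regression step can be illustrated generically: sample an illumination response over a shape parameter and fit an analytical model by least squares, so the fitted expression can later be evaluated in real time. The 1-D polynomial fit below is a hypothetical stand-in for the paper's regression over geometric building blocks:

```python
import numpy as np

def fit_illumination_model(samples, responses, degree=3):
    """Hypothetical sketch: least-squares fit of a 1-D polynomial to
    sampled illumination responses, standing in for regression over a
    geometric shape parameter. Returns a callable analytical model."""
    V = np.vander(np.asarray(samples, dtype=float), degree + 1)
    coeffs, *_ = np.linalg.lstsq(V, np.asarray(responses, dtype=float),
                                 rcond=None)
    return np.poly1d(coeffs)               # cheap to evaluate per shading call
```

Once fitted offline, evaluating the polynomial in a shader is a handful of multiply-adds, which is what makes the analytical route compatible with real-time rendering.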

    Visual analysis of protein-ligand interactions

    The analysis of protein-ligand interactions is complex because of the many factors at play. Most current methods for visual analysis provide this information in the form of simple 2D plots, which, besides being quite space-hungry, often encode a low number of different properties. In this paper we present a system for compact 2D visualization of molecular simulations. It purposely omits most spatial information and presents physical information associated with single molecular components and their pairwise interactions through a set of 2D InfoVis tools with coordinated views, suitable interaction, and focus+context techniques to analyze large amounts of data. The system provides a wide range of motifs for elements such as protein secondary structures or hydrogen bond networks, and a set of tools for their interactive inspection, both for a single simulation and for comparing two different simulations. As a result, the analysis of protein-ligand interactions in Molecular Simulation trajectories is greatly facilitated. Peer Reviewed. Postprint (author's final draft).

    Adaptive breakwaters with inflatable elements for coastal protection. Preliminary numerical estimation of their performance

    Excessive erosion of sand beaches is a serious problem worldwide and is particularly pronounced in the Mediterranean region. One of the typical measures for alleviating this erosion consists in building rigid breakwaters in the vicinity of the coast to diminish sand transport. This solution, however, is often accompanied by undesirable alteration of the coastline. In this work we address the viability of using conceptually new structures with inflatable elements, striving to improve control over the sediment transport. The aim of the inflatable element is to adapt the breakwater configuration to the sea state. A preliminary design is proposed and tested in a number of storm scenarios using an in-house finite element/level set model. The influence of various breakwater design parameters on its functionality is studied, and transmission coefficients and maximum pressures exerted upon the breakwater are estimated. The numerical study shows that, for the considered range of storm scenarios, the proposed design is characterized by transmission coefficients below 0.5. It is also shown that the use of inflatable elements facilitates adaptation of the breakwater functionality to a given sea state. Postprint (author's final draft).
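For reference, the reported transmission coefficients follow the standard definition Kt = Ht / Hi, the ratio of transmitted to incident wave height behind the structure; the trivial helper below (not taken from the paper) makes the "below 0.5" criterion concrete:

```python
def transmission_coefficient(h_incident, h_transmitted):
    """Wave transmission coefficient Kt = Ht / Hi: the ratio of
    transmitted to incident significant wave height. Values below 0.5
    mean more than half of the wave height is blocked by the breakwater."""
    if h_incident <= 0:
        raise ValueError("incident wave height must be positive")
    return h_transmitted / h_incident
```

A 2.0 m incident wave reduced to 0.9 m behind the breakwater, for instance, corresponds to Kt = 0.45, within the performance range reported above.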

    Interactive inspection of complex multi-object industrial assemblies

    The final publication is available at Springer via http://dx.doi.org/10.1016/j.cad.2016.06.005. The use of virtual prototypes and digital models containing thousands of individual objects is commonplace in complex industrial applications like the cooperative design of huge ships. Designers are interested in selecting and editing specific sets of objects during interactive inspection sessions. This is, however, not supported by standard visualization systems for huge models. In this paper we discuss in detail the concept of the rendering front in multiresolution trees, its properties, and the algorithms that construct the hierarchy and efficiently render it, applied to very complex CAD models, so that the model structure and the identities of objects are preserved. We also propose an algorithm for the interactive inspection of huge models which uses a rendering budget and supports selection of individual objects and sets of objects, displacement of the selected objects, and real-time collision detection during these displacements. Our solution, based on the analysis of several existing view-dependent visualization schemes, uses a Hybrid Multiresolution Tree that mixes layers of exact geometry, simplified models and impostors, together with a time-critical, view-dependent algorithm and a Constrained Front. The algorithm has been successfully tested in real industrial environments; the models involved are presented and discussed in the paper. Peer Reviewed. Postprint (author's final draft).
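The budget-driven refinement of a rendering front can be sketched as a greedy priority-queue traversal: repeatedly split the front node with the largest screen-space error into its children while the triangle budget allows it. The `Node` structure, error, and cost fields below are hypothetical; this illustrates the general rendering-front idea, not the paper's Hybrid Multiresolution Tree:

```python
import heapq

class Node:
    """Hypothetical multiresolution-tree node: a screen-space error, a
    triangle cost, and child nodes (empty tuple for leaves)."""
    def __init__(self, error, cost, children=()):
        self.error, self.cost, self.children = error, cost, list(children)

def select_front(root, budget):
    """Greedy view-dependent refinement: starting from the root, keep
    replacing the front node with the highest error by its children as
    long as the total triangle cost stays within the budget. Returns the
    list of nodes forming the selected rendering front."""
    heap = [(-root.error, id(root), root)]     # id() breaks error ties
    used = root.cost
    front = []
    while heap:
        _neg_err, _, node = heapq.heappop(heap)
        extra = sum(c.cost for c in node.children) - node.cost
        if node.children and used + extra <= budget:
            used += extra                      # refine: swap node for children
            for c in node.children:
                heapq.heappush(heap, (-c.error, id(c), c))
        else:
            front.append(node)                 # keep node at this resolution
    return front
```

A real system would recompute errors per frame from the camera, constrain the front for object identity, and reuse the previous frame's front instead of rebuilding from the root.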

    Interactive GPU-based generation of solvent-excluded surfaces

    The solvent-excluded surface (SES) is a popular molecular representation that gives the boundary of the molecular volume with respect to a specific solvent. SESs depict which areas of a molecule are accessible by a specific solvent, which is represented as a spherical probe. Despite the popularity of SESs, their generation is still a compute-intensive process, which is often performed in a preprocessing stage prior to the actual rendering (except for small models). For dynamic data or varying probe radii, however, such preprocessing is not feasible, as it prevents interactive visual analysis. Thus, we present a novel approach for the on-the-fly generation of SESs: a highly parallelizable, grid-based algorithm where the SES is rendered using ray-marching. By exploiting modern GPUs, we are able to rapidly generate SESs directly within the mapping stage of the visualization pipeline. Our algorithm can be applied to large time-varying molecules and is scalable, as it can progressively refine the SES if GPU capabilities are insufficient. In this paper, we show how our algorithm is realized and how smooth transitions are achieved during progressive refinement. We further show visual results obtained from real-world data and discuss the performance obtained, which improves upon previous techniques in both the size of the molecules that can be handled and the resulting frame rate. Peer Reviewed. Postprint (author's final draft).
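The grid-based construction can be illustrated with a simplified signed-distance sketch: inflate each atom by the probe radius to get the solvent-accessible surface (SAS), then treat grid points lying at least one probe radius inside the SAS as SES volume. This is a morphological-closing approximation evaluated on the CPU, a sketch of the idea rather than the paper's GPU ray-marching algorithm:

```python
import numpy as np

def sas_distance_grid(centers, radii, probe_r, grid_pts):
    """Approximate signed distance from each grid point to the
    solvent-accessible surface: each atom radius is inflated by the probe
    radius, and the minimum over atoms is taken (negative = inside)."""
    d = np.full(len(grid_pts), np.inf)
    for c, r in zip(centers, radii):
        d = np.minimum(d, np.linalg.norm(grid_pts - c, axis=1) - (r + probe_r))
    return d

def ses_indicator(centers, radii, probe_r, grid_pts):
    """Approximate SES interior test: a point belongs to the SES volume if
    it lies at least probe_r inside the SAS, i.e. a probe sphere centred
    outside the SAS cannot reach it (closing of the atom union)."""
    return sas_distance_grid(centers, radii, probe_r, grid_pts) <= -probe_r
```

For a single atom this reduces exactly to the van der Waals sphere; for overlapping atoms the min-over-atoms field is only an approximation of the true distance field, which is where a proper grid propagation scheme would take over.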

    Learning human viewpoint preferences from sparsely annotated models

    View quality measures compute scores for given views and are used to determine an optimal view in viewpoint selection tasks. Unfortunately, despite the wide adoption of these measures, they are based on computational quantities, such as entropy, rather than on human preferences. To instead tailor viewpoint measures towards humans, view quality measures need to be able to capture human viewpoint preferences. Therefore, we introduce a large-scale crowdsourced data set, which contains 58k annotated viewpoints for 3220 ModelNet40 models. Based on this data, we derive a neural view quality measure abiding by human preferences. We further demonstrate that this view quality measure not only generalizes to models unseen during training, but also to unseen model categories. We are thus able to predict view qualities for single images, and to directly predict human-preferred viewpoints for 3D models by exploiting point-based learning technology, without requiring the generation of intermediate images or the sampling of the view sphere. We detail our data collection procedure, describe the data analysis and model training, and evaluate the predictive quality of our trained viewpoint measure on unseen models and categories. To our knowledge, this is the first deep learning approach to predict a view quality measure solely based on human preferences. This work was supported in part by the Federal Ministry of Education and Research funding program AuCity 3 (Kollaborative und adaptive Mixed Reality in der Hochschullehre am Beispiel des Bauingenieurwesens), between Magdeburg-Stendal University of Applied Sciences, the Bauhaus University Weimar and Ulm University. Open access funding enabled and organized by Projekt DEAL. Peer Reviewed. Postprint (published version).